This article presents a novel method to obtain a sparse representation of multiview images. The method is based on the fact that multiview data is composed of epipolar-plane image lines which are highly redundant. We extend this principle to obtain the layer-based representation, which partitions a multiview image dataset into redundant regions (which we call layers), each related to a constant depth in the observed scene. The layers are extracted using a general segmentation framework that takes into account the camera setup and occlusion constraints. To obtain a sparse representation, the extracted layers are further decomposed using a multidimensional discrete wavelet transform (DWT), first across the view domain and then with a two-dimensional (2D) DWT applied to the image dimensions. We modify the viewpoint DWT to take occlusions and scene depth variations into account. Simulation results based on nonlinear approximation show that the sparsity of our representation is superior to that of the multidimensional DWT without disparity compensation. In addition, we demonstrate that the constant-depth model of the representation can be used to synthesise novel viewpoints for immersive viewing applications and also to denoise multiview images.
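To make the decomposition order concrete, the following is a minimal sketch of the separable view-then-space DWT described above, written with the PyWavelets library. It deliberately omits the paper's actual contributions (the layer extraction, disparity compensation, and occlusion handling of the viewpoint DWT); the `views` array and the choice of `haar`/`db2` wavelets are illustrative assumptions, not the authors' configuration.

```python
# Sketch of a plain separable multidimensional DWT on a multiview stack:
# a 1D DWT across the view axis, followed by a 2D DWT on each image.
# NOTE: no disparity compensation or occlusion handling is applied here.
import numpy as np
import pywt

# Hypothetical multiview stack: n_views images of size H x W,
# e.g. views captured along a horizontal camera baseline.
n_views, H, W = 8, 256, 256
views = np.random.rand(n_views, H, W)  # stand-in for real multiview data

# Step 1: one-level 1D DWT across the view (viewpoint) axis.
low, high = pywt.dwt(views, 'haar', axis=0)

# Step 2: multilevel 2D DWT over the image dimensions of each
# viewpoint subband.
spatial_coeffs = [
    pywt.wavedec2(band, 'db2', level=3, axes=(-2, -1))
    for band in (low, high)
]

# Nonlinear approximation would then retain only the largest-magnitude
# coefficients across all subbands and reconstruct from those.
```

In the paper's representation, the viewpoint transform in Step 1 is additionally aligned along the constant-depth (disparity) direction of each layer, which is what makes the view-domain data highly redundant and the resulting coefficients sparse.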